92 research outputs found

    Presence-Only Geographical Priors for Fine-Grained Image Classification

    Appearance information alone is often not sufficient to accurately differentiate between fine-grained visual categories. Human experts make use of additional cues, such as where and when a given image was taken, to inform their final decision. This contextual information is readily available in many online image collections but has been underutilized by existing image classifiers that focus solely on making predictions based on the image contents. We propose an efficient spatio-temporal prior that, when conditioned on a geographical location and time, estimates the probability that a given object category occurs at that location. Our prior is trained from presence-only observation data and jointly models object categories, their spatio-temporal distributions, and photographer biases. Experiments performed on multiple challenging image classification datasets show that combining our prior with the predictions from image classifiers results in a large improvement in final classification performance.
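As a rough illustration of the combination step (the function name, the epsilon smoothing, and the renormalization are our own assumptions; the paper's exact formulation may differ), a classifier's per-category probabilities can be reweighted by the prior's per-category occurrence probabilities:

```python
import numpy as np

def combine_with_geo_prior(image_probs, prior_probs, eps=1e-12):
    """Combine classifier softmax scores with a spatio-temporal prior.

    image_probs: p(category | image), shape (num_classes,)
    prior_probs: p(category occurs | location, time), shape (num_classes,)
    Returns a renormalized posterior over categories.
    """
    combined = image_probs * (prior_probs + eps)
    return combined / combined.sum()

# Toy example: the prior downweights a class unlikely at this location,
# so the posterior argmax moves from class 0 to class 1.
image_probs = np.array([0.5, 0.3, 0.2])
prior_probs = np.array([0.01, 0.9, 0.5])
posterior = combine_with_geo_prior(image_probs, prior_probs)
```

The multiplicative form corresponds to treating the image evidence and the location/time evidence as (approximately) independent sources before renormalizing.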

    Responsive and Responsible: Levels of Faculty Encouragement of Civic Engagement

    This study explores how often faculty members encourage students to engage with campus, local, state, national, and global issues. Using data from the 2013 administration of the Faculty Survey of Student Engagement (FSSE), the results show that faculty members are more likely to encourage students to engage in state, national, or global issues than campus or local issues. Differences in faculty encouragement of civic engagement are also presented across gender, racial/ethnic identification, rank and employment status, and institutional affiliation, among other characteristics. Implications for practice are provided.

    When Does Contrastive Visual Representation Learning Work?

    Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks. Comment: CVPR 2022.
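For reference, the contrastive objective underlying methods of this family (e.g. SimCLR) can be sketched as an InfoNCE loss over paired embeddings of two augmented views. This is a generic NumPy illustration under our own simplifications, not the paper's code:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE loss for a batch of paired embeddings.

    z1, z2: L2-normalized embeddings of two augmented views, shape (N, d).
    Each z1[i] should match z2[i]; all other rows of z2 act as negatives.
    """
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
# Perfectly aligned views give a low loss; mismatched pairs a higher one.
loss_aligned = info_nce_loss(z, z)
loss_shuffled = info_nce_loss(z, z[::-1])
```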

    EasyDig

    This poster presents the EasyDig, a device that assists people who have difficulty bending or kneeling for long periods while gardening.

    Multi-Label Learning From Single Positive Labels

    Predicting all applicable labels for a given image is known as multi-label classification. Compared to the standard multi-class case (where each image has only one label), it is considerably more challenging to annotate training data for multi-label classification. When the number of potential labels is large, human annotators find it difficult to mention all applicable labels for each training image. Furthermore, in some settings detection is intrinsically difficult, e.g. finding small object instances in high-resolution images. As a result, multi-label training data is often plagued by false negatives. We consider the hardest version of this problem, where annotators provide only one relevant label for each image. As a result, training sets will have only one positive label per image and no confirmed negatives. We explore this special case of learning from missing labels across four different multi-label image classification datasets for both linear classifiers and end-to-end fine-tuned deep networks. We extend existing multi-label losses to this setting and propose novel variants that constrain the number of expected positive labels during training. Surprisingly, we show that in some cases it is possible to approach the performance of fully labeled classifiers despite training with significantly fewer confirmed labels. Comment: CVPR 2021. Supplementary material included.
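The naive "assume negative" baseline for this setting, and a constraint on the expected number of positives, can be sketched roughly as follows (the function names and the quadratic penalty form are illustrative assumptions, not the paper's exact losses):

```python
import numpy as np

def assume_negative_loss(probs, positive_idx):
    """'Assume negative' BCE: the single observed label is treated as
    positive and every unobserved label as negative (risking false
    negatives, which is exactly the failure mode discussed above)."""
    eps = 1e-12
    loss = -np.log(probs[positive_idx] + eps)
    negatives = np.delete(probs, positive_idx)
    return loss - np.log(1.0 - negatives + eps).sum()

def expected_positives_penalty(probs, k):
    """Regularizer nudging the expected number of predicted positives
    toward k, discouraging collapse to all-negative predictions."""
    return (probs.sum() - k) ** 2

# Sigmoid outputs for 4 labels; only label 0 was annotated as positive.
probs = np.array([0.9, 0.1, 0.8, 0.05])
total = assume_negative_loss(probs, 0) + expected_positives_penalty(probs, k=2)
```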

    Benchmarking Representation Learning for Natural World Image Collections

    Recent progress in self-supervised learning has resulted in models that are capable of extracting rich representations from image collections without requiring any explicit label supervision. However, to date the vast majority of these approaches have restricted themselves to training on standard benchmark datasets such as ImageNet. We argue that fine-grained visual categorization problems, such as plant and animal species classification, provide an informative testbed for self-supervised learning. In order to facilitate progress in this area we present two new natural world visual classification datasets, iNat2021 and NeWT. The former consists of 2.7M images from 10k different species uploaded by users of the citizen science application iNaturalist. We designed the latter, NeWT, in collaboration with domain experts with the aim of benchmarking the performance of representation learning algorithms on a suite of challenging natural world binary classification tasks that go beyond standard species classification. These two new datasets allow us to explore questions related to large-scale representation and transfer learning in the context of fine-grained categories. We provide a comprehensive analysis of feature extractors trained with and without supervision on ImageNet and iNat2021, shedding light on the strengths and weaknesses of different learned features across a diverse set of tasks. We find that features produced by standard supervised methods still outperform those produced by self-supervised approaches such as SimCLR. However, improved self-supervised learning methods are constantly being released, and the iNat2021 and NeWT datasets are a valuable resource for tracking their progress. Comment: CVPR 2021.
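A common protocol for comparing feature extractors like those analyzed here is to fit a linear classifier on frozen features (a "linear probe"). The sketch below uses synthetic stand-in features and our own plain gradient-descent fit; the paper's actual evaluation details may differ:

```python
import numpy as np

def linear_probe(features, labels, num_classes, lr=0.05, steps=200):
    """Fit a softmax linear classifier on frozen features by gradient
    descent; the backbone that produced the features is never updated."""
    n, d = features.shape
    W = np.zeros((d, num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        logits = features @ W
        logits -= logits.max(axis=1, keepdims=True)  # stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (p - onehot) / n      # softmax-CE gradient
    return W

# Synthetic stand-in for features from a pretrained extractor:
# two well-separated classes, symmetric about the origin.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
shift = 1.5 * (2 * labels - 1)[:, None]
feats = rng.normal(size=(200, 8)) + shift
W = linear_probe(feats, labels, num_classes=2)
acc = ((feats @ W).argmax(axis=1) == labels).mean()
```

A higher probe accuracy on the same downstream task is then read as evidence of a stronger learned representation.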

    Deflating the shale gas potential of South Africa’s Main Karoo Basin

    The Main Karoo basin has been identified as a potential source of shale gas (i.e. natural gas that can be extracted via the process of hydraulic stimulation or 'fracking'). Current resource estimates of 0.4–11 x 10¹² m³ (13–390 Tcf) are speculatively based on carbonaceous shale thickness, area, depth, thermal maturity and, most of all, the total organic carbon content of the Ecca Group's Whitehill Formation, which is more than 30 m thick. These estimates were made without any measurements of the actual available gas content of the shale. Such measurements were recently conducted on samples from two boreholes and are reported here. They indicate that there is little to no desorbed and residual gas, despite high total organic carbon values. In addition, vitrinite reflectance and illite crystallinity of unweathered shale material reveal the Ecca Group to be metamorphosed and overmature. Organic carbon in the shale is largely unbound to hydrogen, and little hydrocarbon generation potential remains. These findings lead to the conclusion that the lowest of the existing resource estimates, namely 0.4 x 10¹² m³ (13 Tcf), may be the most realistic. Even so, such a low estimate still represents a large resource with developmental potential for the South African petroleum industry. To be economically viable, however, the resource would need to be confined to a small, well-delineated 'sweet spot' in the vast southern area of the basin. We acknowledge that the drill cores we investigated fall outside the currently identified sweet spots, and these areas should be targets for further scientific drilling projects.
    Significance:
    • This is the first report of direct measurements of the actual gas content of southern Karoo basin shales.
    • The findings reveal the carbon content of the shales to be dominated by overmature organic matter.
    • The results demonstrate a much reduced potential shale gas resource presented by the Whitehill Formation.
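The unit conversion behind the quoted resource figures can be checked directly: 1 cubic foot is 0.0283168 m³, so 1 Tcf (10¹² cubic feet) is about 2.83 x 10¹⁰ m³, and 13–390 Tcf corresponds to roughly 0.37–11 x 10¹² m³:

```python
# Sanity check of the quoted figures: 1 cubic foot = 0.0283168 m^3,
# so 1 Tcf (10^12 cubic feet) = 2.83168 x 10^10 m^3.
M3_PER_TCF = 0.0283168 * 1e12

low_tcf, high_tcf = 13, 390
low_m3 = low_tcf * M3_PER_TCF    # ~0.37 x 10^12 m^3
high_m3 = high_tcf * M3_PER_TCF  # ~11.0 x 10^12 m^3
```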